
    A Connectionist Theory of Phenomenal Experience

    When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys, or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches vehicle and process theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are dissociable, and on the other, by the classical computational theory of mind – the theory that takes human cognition to be a species of symbol manipulation. But two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the experimental methodologies employed in the dissociation studies – so critical, in fact, that it’s no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of connectionism; and connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit representation of information in neurally realized PDP networks. This hypothesis leads us to re-assess some common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways.
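
    As an illustrative sketch only (the abstract specifies no implementation), one way to picture explicit representation in a PDP network is as a stable pattern of activation that a network of simple interconnected units settles into. The toy Hopfield-style network below is an assumption of that kind rather than the authors’ model: each stored activation pattern stands in for one explicitly represented content, and the network relaxes from a degraded cue back to the nearest stored pattern.

        import numpy as np

        def hebbian_weights(patterns):
            """Build a symmetric weight matrix that stores the given +/-1 patterns."""
            n = patterns.shape[1]
            W = patterns.T @ patterns / n
            np.fill_diagonal(W, 0.0)  # no self-connections
            return W

        def settle(W, state, steps=10):
            """Synchronously update unit activations until the network relaxes
            into a stable pattern of activation across its units."""
            for _ in range(steps):
                state = np.sign(W @ state)
                state[state == 0] = 1  # break ties towards +1
            return state

        # Two stored +/-1 patterns; each stable activation pattern the network
        # can settle into stands in for one explicitly represented content.
        patterns = np.array([[1, -1, 1, -1, 1, -1],
                             [1, 1, -1, -1, 1, 1]])
        W = hebbian_weights(patterns)

        cue = np.array([1, -1, 1, -1, -1, -1])  # degraded version of pattern one
        print(settle(W, cue))                   # settles back to the stored pattern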

    A Defence of Cartesian Materialism

    One of the principal tasks Dennett sets himself in "Consciousness Explained" is to demolish the Cartesian theatre model of phenomenal consciousness, which in its contemporary garb takes the form of Cartesian materialism: the idea that conscious experience is a process of presentation realized in the physical materials of the brain. The now standard response to Dennett is that, in focusing on Cartesian materialism, he attacks an impossibly naive account of consciousness held by no one currently working in cognitive science or the philosophy of mind. Our response is quite different. We believe that, once properly formulated, Cartesian materialism is no straw man. Rather, it is an attractive hypothesis about the relationship between the computational architecture of the brain and phenomenal consciousness, and hence one that is worthy of further exploration. Consequently, our primary aim in this paper is to defend Cartesian materialism from Dennett's assault. We do this by showing that Dennett's argument against this position is founded on an implicit assumption (about the relationship between phenomenal experience and information coding in the brain) which, while valid in the context of classical cognitive science, is not forced on connectionism.

    The Disunity of Consciousness

    It is commonplace for both philosophers and cognitive scientists to express their allegiance to the "unity of consciousness". This is the claim that a subject’s phenomenal consciousness, at any one moment in time, is a single thing. This view has had a major influence on computational theories of consciousness. In particular, what we call single-track theories dominate the literature: theories which contend that our conscious experience is the result of a single consciousness-making process or mechanism in the brain. We argue that the orthodox view is quite wrong: phenomenal experience is not a unity, in the sense of being a single thing at each instant. It is a multiplicity, an aggregate of phenomenal elements, each of which is the product of a distinct consciousness-making mechanism in the brain. Consequently, cognitive science is in need of a multi-track theory of consciousness: a computational model that acknowledges both the manifold nature of experience and its distributed neural basis.

    The Multiplicity of Consciousness and the Emergence of the Self

    Schizophrenia is a complex and heterogeneous disease, incorporating at least three distinct subsyndromes: psycho-motor poverty (poverty of speech, lack of spontaneous movement, blunting of affect), disorganisation (inappropriate affect, disturbances of the form of thought), and reality distortion (Liddle 1987, Johnstone 1991). The reality distortion syndrome encompasses the s

    The role of representation in computation

    Reformers urge that representation no longer earns its explanatory keep in cognitive science, and that it is time to discard this troublesome concept. In contrast, we hold that without representation cognitive science is utterly bereft of tools for explaining natural intelligence. In order to defend the latter position, we focus on the explanatory role of representation in computation. We examine how the methods of digital and analog computation are used to model a relatively simple target system, and show that representation plays an ineliminable explanatory role in both cases. We conclude that, to the extent that biological systems engage in computation, representation is destined to play an explanatory role in cognitive science.
    Gerard O’Brien, Jon Opie
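
    As a hedged illustration of the digital half of this contrast (the paper’s own target system and models are not reproduced here), the sketch below numerically models a simple physical system, an object cooling toward room temperature, and its comments flag the representational mapping that does the explanatory work: program variables stand in for physical magnitudes.

        # Digital model of a simple target system: Newtonian cooling,
        # dT/dt = -k * (T - T_ambient). Explaining why this computation models
        # the physics appeals to a representational mapping: `temp` stands in
        # for the object's temperature, `dt` for an interval of elapsed time.

        def simulate_cooling(temp, ambient, k, dt, steps):
            """Euler integration of Newtonian cooling; each value of `temp`
            represents the object's temperature at one simulated time step."""
            history = [temp]
            for _ in range(steps):
                temp += -k * (temp - ambient) * dt
                history.append(temp)
            return history

        print(simulate_cooling(temp=90.0, ambient=20.0, k=0.1, dt=1.0, steps=5))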

    How do connectionist networks compute?

    Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its computational credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the “conventional” account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks are not genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
    Gerard O’Brien and Jon Opie
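
    As a minimal, assumed sketch rather than the simulations analysed in the paper, the toy feedforward network below illustrates the sense in which connection weights carry the computational load: one and the same architecture realises different input-output functions under different weight configurations.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def forward(x, W_hidden, W_output):
            """Propagate an input pattern through a tiny two-layer feedforward
            network; the mapping from inputs to outputs is fixed entirely by
            the connection weights, while unit activations are transient."""
            hidden = sigmoid(W_hidden @ x)     # hidden-layer activation pattern
            return sigmoid(W_output @ hidden)  # output-layer activation pattern

        # Two weight configurations over the same architecture: each realises a
        # different input-output function, so what the network computes is a
        # matter of how its connection weights are configured.
        rng = np.random.default_rng(0)
        W1_hidden, W1_output = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
        W2_hidden, W2_output = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))

        x = np.array([1.0, 0.0])
        print(forward(x, W1_hidden, W1_output))  # one weight configuration...
        print(forward(x, W2_hidden, W2_output))  # ...a different computation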